"Oserror: [Errno 1] Operation Not Permitted" When Installing Scrapy in Osx 10.11 (El Capitan) (System Integrity Protection)

I also think it's absolutely not necessary to start hacking OS X.

I was able to solve it by running:

brew install python

It seems that the python/pip that ships with El Capitan has some issues. The Homebrew build installs under /usr/local, which System Integrity Protection does not lock down, so pip can write there without special permissions.
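
After that, installing Scrapy against the Homebrew Python should work without sudo. A minimal sketch (assuming a default Homebrew setup, where /usr/local/bin comes before /usr/bin in PATH):

which python pip      # should resolve to /usr/local/bin, not the SIP-protected /usr/bin
pip install scrapy    # writes under /usr/local, which SIP leaves alone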

Scrapy error twisted.web._newclient.RequestGenerationFailed

I was able to get it to work. Steps to reproduce:

Create a new directory and start a new Python virtual environment in it, then update pip and install Scrapy and PyInstaller into the virtual environment.
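
A minimal sketch of those commands (the directory name scraper-app is just a placeholder):

mkdir scraper-app && cd scraper-app
python3 -m venv venv                # create the virtual environment
source venv/bin/activate
pip install --upgrade pip           # update pip first
pip install scrapy pyinstaller      # both tools go into the venv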

In the new directory, create the two Python scripts; mine are main.py and scrape.py.

main.py

import tkinter as tk
from tkinter import messagebox as tkms
from tkinter import ttk
import shlex
import os
import scrapy
from subprocess import Popen
import json

def get_path(name):
    # Build a path relative to this file, with forward slashes.
    return os.path.join(os.path.dirname(__file__), name).replace("\\", "/")

harvest = None

def watch():
    global harvest
    if harvest:
        if harvest.poll() is not None:
            # Process finished: update the progress bar.
            progress_bar.stop()
            # If harvest finishes OK show a confirmation message, otherwise show an error.
            if harvest.returncode == 0:
                mes = tkms.showinfo(title='progress', message='Scraping Done')
                if mes == 'ok':
                    root.destroy()
            else:
                tkms.showinfo(title='Error', message=f'harvest returncode == {harvest.returncode}')
            harvest = None
        else:
            # Indicate that the process is still running and poll again in 100 ms.
            progress_bar.grid()
            progress_bar.start(10)
            root.after(100, watch)

def scrape():
    global harvest
    # The spider path is resolved relative to this file (was get_url, a typo).
    command_line = shlex.split('scrapy runspider ' + get_path('scrape.py'))
    with open('stdout.txt', 'wb') as out, open('stderr.txt', 'wb') as err:
        harvest = Popen(command_line, stdout=out, stderr=err)
    watch()

root = tk.Tk()
root.title("Title")

url = tk.StringVar(root)

entry1 = tk.Entry(root, width=90, textvariable=url)
entry1.grid(row=0, column=0, columnspan=3)

my_button = tk.Button(root, text="Process", command=scrape)
my_button.grid(row=2, column=2)

progress_bar = ttk.Progressbar(root, orient=tk.HORIZONTAL, length=300, mode='indeterminate')
progress_bar.grid(row=3, column=2)
progress_bar.grid_forget()

root.mainloop()

scrape.py

import scrapy
import os

class ImgSpider(scrapy.Spider):
    name = 'img'

    # allowed_domains = [user_domain]
    start_urls = ['https://www.bbc.com/news/in_pictures']  # I just used this for testing.

    def parse(self, response):
        title = response.css('img::attr(alt)').getall()
        links = response.css('img::attr(src)').getall()

        if not os.path.exists('./images'):
            os.makedirs('./images')
        with open('./images/urls.txt', 'w') as f:
            for i in title:
                f.write(i + '\n')  # one entry per line; the with block closes the file
        yield {"title": title, "links": links}

Then run pyinstaller -F main.py, which will generate a main.spec file. Open that file and make the changes marked below.

main.spec

# -*- mode: python ; coding: utf-8 -*-

block_cipher = None
import os

scrape = "scrape.py"
imagesdir = "images"

a = Analysis(
    ['main.py'],
    pathex=[],
    binaries=[],
    datas=[(scrape, '.'), (imagesdir, '.')],  # add these lines
    hiddenimports=[],
    hookspath=[],
    hooksconfig={},
    runtime_hooks=[],
    excludes=[],
    win_no_prefer_redirects=False,
    win_private_assemblies=False,
    cipher=block_cipher,
    noarchive=False,
)
pyz = PYZ(a.pure, a.zipped_data, cipher=block_cipher)

exe = EXE(
    pyz,
    a.scripts,
    a.binaries,
    a.zipfiles,
    a.datas,
    [],
    name='main',
    debug=False,
    bootloader_ignore_signals=False,
    strip=False,
    upx=True,
    upx_exclude=[],
    runtime_tmpdir=None,
    console=True,  # Once you have confirmed it is working you can set this to False
    disable_windowed_traceback=False,
    argv_emulation=False,
    target_arch=None,
    codesign_identity=None,
    entitlements_file=None,
)

Then, once that is all done, go back to your terminal, run pyinstaller main.spec, and Bob's your uncle...
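
By default PyInstaller drops the bundled app into dist/, named after the name= field in the spec, so (a sketch):

pyinstaller main.spec
./dist/main      # launch the bundled GUI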



Update

main.py

I essentially just removed the shlex portion and made the path to scrape.py relative to main.py's own location.

import tkinter as tk
from tkinter import messagebox as tkms
from tkinter import ttk
from subprocess import Popen
import json
import os

def get_url():
    print('Getting URL...')
    data = url.get()
    if not os.path.exists('./data'):
        os.makedirs('./data')
    with open('./data/url.json', 'w') as f:
        json.dump(data, f)

harvest = None

def watch():
    global harvest
    print('watch started')
    if harvest:
        if harvest.poll() is not None:
            print('progress bar ends')
            # Update your progressbar to finished.
            progress_bar.stop()
            # If harvest finishes OK show a confirmation message, otherwise show an error.
            if harvest.returncode == 0:
                mes = tkms.showinfo(title='progress', message='Scraping Done')
                if mes == 'ok':
                    root.destroy()
            else:
                tkms.showinfo(title='Error', message=f'harvest returncode == {harvest.returncode}')

            # Maybe report harvest.returncode?
            print(f'harvest return code after poll() =--######==== {harvest.returncode}')
            print(f'harvest poll =--######==== {harvest.poll}')
            harvest = None
        else:
            # Indicate that the process is running.
            print('progress bar starts')
            progress_bar.grid()
            progress_bar.start(10)
            print(f'harvest return code =--######==== {harvest.returncode}')
            # Re-schedule `watch` to be called again after 0.1 s.
            root.after(100, watch)

def scrape():
    global harvest
    scrapefile = os.path.join(os.path.dirname(__file__), 'scrape.py')
    # harvest = Popen('scrapy runspider ./scrape.py', stdout=out, stderr=err, shell=True)
    with open('stdout.txt', 'wb') as out, open('stderr.txt', 'wb') as err:
        # The with block closes both files; no explicit close() calls are needed.
        harvest = Popen(["python3", scrapefile], stdout=out, stderr=err)
    print('harvesting started')
    watch()

root = tk.Tk()
root.title("Title")

url = tk.StringVar(root)

entry1 = tk.Entry(root, width=90, textvariable=url)
entry1.grid(row=0, column=0, columnspan=3)

my_button = tk.Button(root, text="Process", command=lambda: [get_url(), scrape()])
my_button.grid(row=2, column=2)

progress_bar = ttk.Progressbar(root, orient=tk.HORIZONTAL, length=300, mode='indeterminate')
progress_bar.grid(row=3, column=2)
progress_bar.grid_forget()

root.mainloop()

main.spec

# -*- mode: python ; coding: utf-8 -*-
block_cipher = None

a = Analysis(['main.py'], pathex=[], binaries=[],
             datas=[('scrape.py', '.')],  # <------- this is the only change that I made
             hiddenimports=[], hookspath=[],
             hooksconfig={}, runtime_hooks=[], excludes=[],
             win_no_prefer_redirects=False, win_private_assemblies=False,
             cipher=block_cipher, noarchive=False)
pyz = PYZ(a.pure, a.zipped_data, cipher=block_cipher)
exe = EXE(pyz, a.scripts, a.binaries, a.zipfiles, a.datas, [],
          name='main', debug=False, bootloader_ignore_signals=False, strip=False,
          upx=True, upx_exclude=[], runtime_tmpdir=None, console=False,
          disable_windowed_traceback=False, argv_emulation=False, target_arch=None,
          codesign_identity=None, entitlements_file=None)

I made no changes to scrape.py.

Operation Not Permitted when on root - El Capitan (rootless disabled)

Never mind, I figured it out. For anyone else having this problem: reboot your Mac and hold ⌘+R while it boots to enter Recovery Mode. Then go into Utilities > Terminal and type the following commands:

csrutil disable
reboot

This is a result of System Integrity Protection. More info here.
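
If you want to check the current state, or turn the protection back on later, the same csrutil tool handles both (re-enabling again requires booting into Recovery Mode):

csrutil status   # works from a normal boot; reports enabled/disabled
csrutil enable   # run from Recovery Mode to restore SIP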

EDIT

If you know what you are doing and are used to running Linux, you should use the above solution as many of the SIP restrictions are a complete pain in the ass.

However, if you are a tinkerer/noob/"poweruser" and don't know what you are doing, this can be very dangerous and you are better off using the answer below.

error: [Errno 1] Operation not permitted: '/usr/bin/pyobfuscate' MacOS Sierra

In general: avoid /usr/bin on current macOS (where it's read-only)

/usr/bin isn't writable on newer versions of macOS, even as root, unless System Integrity Protection has been disabled. Consider:

sudo python setup.py install --prefix=/usr/local

Another option, which doesn't require sudo at all, is to use a virtualenv:

virtualenv ~/pyobfuscate.venv       ## create a virtualenv
. ~/pyobfuscate.venv/bin/activate   ## activate that virtualenv
python setup.py install             ## install pyobfuscate in that virtualenv

...and thereafter, run . ~/pyobfuscate.venv/bin/activate in a given shell before using pyobfuscate in that shell.
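
For example, in a later shell session (myscript.py is a placeholder for your own file):

. ~/pyobfuscate.venv/bin/activate
pyobfuscate myscript.py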


But pyobfuscate's setup.py needs to be fixed before you can do that:

Current versions of pyobfuscate have their setup.py written as follows:

data_files=[('/usr/bin', ['pyobfuscate'])]

That's inappropriate; instead, it should be:

scripts=['pyobfuscate']

...which will follow the prefix given, whether via a virtualenv or a --prefix= argument.
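
Put in context, a corrected setup.py would look something like this (a sketch; the version and any other metadata here are placeholders, not the project's real values):

from distutils.core import setup

setup(
    name='pyobfuscate',
    version='0.0.0',          # placeholder version
    scripts=['pyobfuscate'],  # installed under <prefix>/bin rather than a hardcoded /usr/bin
)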

Scrapy not installed correctly on mac?

The --user option is used when you want to install a package into the local user's $HOME; on a Mac that means $HOME/Library/Python/2.7/lib/python/site-packages.

The scrapy executable can then be found at $HOME/Library/Python/2.7/bin/scrapy, so you should edit your .bash_login file and adjust the PATH environment variable:

PATH="$HOME/Library/Python/2.7/bin/:$PATH"

Or just reinstall scrapy without the --user flag.
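
A sketch of that reinstall (sudo may be needed if the default site-packages isn't writable by your user):

pip uninstall scrapy   ## remove the --user copy
pip install scrapy     ## reinstall into the default site-packages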

Hope that helps.

Execute a script when alarm is triggered in McAfee ESM 11.5.X

I found a way to do this. You can't execute a script that sits on the ESM itself (in the image above, the IP is that of the ESM machine). You have to spin up another VM with the required ports open (refer to the image in the question), so the IP address, credentials, and script path all have to be those of that VM.

Basically, you should have a dedicated command server for script execution and similar tasks.

Also, in my case I noticed that only bash scripts were executed; Python scripts failed. The workaround is to point ESM at a bash script and have that bash script invoke the Python script you actually want to run, as sketched below.
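
A minimal wrapper along these lines (the interpreter path and script location are hypothetical; adjust them to your VM):

#!/bin/bash
# ESM invokes this bash wrapper, which hands off to the real Python script.
/usr/bin/python3 /opt/scripts/alarm_handler.py "$@"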

Hope it helps!


