[Naver News Summarization Project] Crawling

YoonJuHan · 2023. 12. 22. 10:02

Table of Contents

Project overview: https://study-yoon.tistory.com/224

1. Crawling: https://study-yoon.tistory.com/225
2. Clustering: https://study-yoon.tistory.com/226
3. Summarization: https://study-yoon.tistory.com/227

GitHub: https://github.com/Yoon-juhan/naverNewsCrawling

 

🔑 Parallelizing the crawl with threading

  • Thread reference: https://blog.naver.com/nkj2001/222728316792
  • The project needs to collect roughly 100 articles per category every hour
  • With 8 categories, that is 800+ articles per run
  • Before threading, one run took about 13 minutes (Jupyter inside VS Code)
  • Running one thread per category cut this to roughly 3 minutes 30 seconds (13 min → 3 min 30 s); a minimal timing sketch follows below
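The sketch below only illustrates how the sequential and threaded runs were compared; crawl_category is a hypothetical stand-in for the per-category crawl methods shown later, not part of the project code:

import threading
import time

def crawl_category(category_num):       # hypothetical stand-in for one category's crawl
    time.sleep(1)                        # simulates the network + parsing work

start = time.perf_counter()
for num in range(100, 108):              # sequential: one category after another
    crawl_category(num)
print("sequential:", round(time.perf_counter() - start, 1), "s")

start = time.perf_counter()
threads = [threading.Thread(target=crawl_category, args=(num,)) for num in range(100, 108)]
for t in threads: t.start()              # all eight categories run at the same time
for t in threads: t.join()
print("threaded:", round(time.perf_counter() - start, 1), "s")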

 

The tqdm wrapped around the loops is a library that displays a progress bar for the loop; a minimal usage sketch follows.
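A minimal sketch of the tqdm pattern used in the crawler (the URL list here is just a placeholder):

from tqdm.notebook import tqdm
import time

for url in tqdm(["url1", "url2", "url3"], desc="정치 CONTENT"):   # desc labels the progress bar
    time.sleep(0.5)                                                # stands in for browser.get() + parsing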

 

1. Selecting categories

  • Naver News: https://news.naver.com/
  • Eight categories were selected: politics (정치), economy (경제), society (사회), life/culture (생활/문화), world (세계), IT/science (IT/과학), entertainment (연예), and sports (스포츠); the section-number mapping the code relies on is sketched right after this list
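For reference, this is the mapping the code below relies on (derived from its category_names list and the category_num - 100 indexing; entertainment and sports are served from separate subdomains, so they get their own crawl methods):

category_names = ["정치", "경제", "사회", "생활/문화", "세계", "IT/과학", "연예", "스포츠"]
for category_num in range(100, 108):
    print(category_num, category_names[category_num - 100])
# 100–105        → https://news.naver.com/section/{category_num}
# 106 (연예)     → https://entertain.naver.com/now
# 107 (스포츠)   → https://sports.news.naver.com/...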

 

2. Crawling the article URLs for the selected categories

URL crawling code

from bs4 import BeautifulSoup
from selenium import webdriver
from selenium.webdriver.common.by import By
import pandas as pd
import numpy as np
import re
import time
import datetime
from tqdm.notebook import tqdm
import threading

options = webdriver.ChromeOptions()
options.add_argument('--headless')
options.add_argument('--no-sandbox')
options.add_argument('--disable-dev-shm-usage')
options.add_argument('--log-level=3')  # raise Chrome's log level so INFO/WARNING messages are suppressed
browser = webdriver.Chrome(options=options)   # shared options (the threaded crawlers below each open their own browser)
n = [5, 7]  # loop bounds: entertainment pages range(1, 5), sports sections range(7)

# Crawl article links
class UrlCrawling:
    def __init__(self):
        self.category_names = ["정치", "경제", "사회", "생활/문화", "세계", "IT/과학", "연예", "스포츠"]
        self.url_df_list = [None] * 8
        self.lock = threading.Lock()

    # "정치", "경제", "사회", "생활/문화", "세계", "IT/과학"
    def getUrl(self, category_num):
        a_tag_list = []
        urls = []
        category_list = []
        browser = webdriver.Chrome(options=options)

        url = f'https://news.naver.com/section/{category_num}'
        browser.get(url)

        # click the "load more articles" button twice
        browser.find_element(By.CLASS_NAME, "_CONTENT_LIST_LOAD_MORE_BUTTON").click()
        time.sleep(1)
        browser.find_element(By.CLASS_NAME, "_CONTENT_LIST_LOAD_MORE_BUTTON").click()
        time.sleep(1)

        soup = BeautifulSoup(browser.page_source, "html.parser")

        a_tag_list.extend(soup.select(".section_latest ._TEMPLATE .sa_thumb_link"))

        for a in a_tag_list:
            urls.append(a["href"])
            category_list.append(self.category_names[category_num-100])
        
        url_df = pd.DataFrame({'category' : category_list,
                               'url' : urls})

        with self.lock:
            self.url_df_list[category_num-100] = url_df

        browser.quit()


    # Entertainment (연예)
    def getEntertainmentUrl(self):
        a_tag_list = []
        urls = []
        category_list = []
        today = datetime.date.today()
        browser = webdriver.Chrome(options=options)

        for page in range(1, n[0]):  # (1, 5)
            url = f'https://entertain.naver.com/now#sid=106&date={today}&page={page}'
            browser.get(url)

            time.sleep(0.5)

            soup = BeautifulSoup(browser.page_source, "html.parser")

            a_tag_list.extend(soup.select(".news_lst li>a"))


        for a in a_tag_list:
            urls.append("https://entertain.naver.com" + a["href"])
            category_list.append("연예")

        url_df = pd.DataFrame({'category' : category_list,
                               'url' : urls})

        with self.lock:
            self.url_df_list[6] = url_df
        
        browser.quit()


    # Sports (스포츠)
    def getSportsUrl(self):      
        a_tag_list = []
        urls = []
        category_list = []
        today = str(datetime.date.today()).replace('-', '')
        browser = webdriver.Chrome(options=options)
        category = ["kfootball", "wfootball", "kbaseball", "wbaseball", "basketball", "volleyball", "golf"]

        for i in range(n[1]):  # 7
            url = f'https://sports.news.naver.com/{category[i]}/news/index?isphoto=N&date={today}&page=1'
            browser.get(url)

            time.sleep(0.5)

            soup = BeautifulSoup(browser.page_source, "html.parser")
            a_tag_list.extend(soup.select(".news_list li>a"))


        for i in range(len(a_tag_list)):
            urls.append("https://sports.news.naver.com/news" + re.search('\?.+', a_tag_list[i]["href"]).group())
            category_list.append("스포츠")

        url_df = pd.DataFrame({'category' : category_list,
                               'url' : urls})
        
        with self.lock:
            self.url_df_list[7] = url_df

        browser.quit()
        

# URL threads (one per category)
def urlThread(url_crawler):

    url_threads = []
    for category_num in range(100, 108):
        if category_num <= 105:
            url_thread = threading.Thread(target=url_crawler.getUrl, args=(category_num,))
        elif category_num == 106:
            url_thread = threading.Thread(target=url_crawler.getEntertainmentUrl)
        else:
            url_thread = threading.Thread(target=url_crawler.getSportsUrl)

        url_threads.append(url_thread)
        url_thread.start()

    for url_thread in url_threads:
        url_thread.join()

 

 

  • A separate thread is created for each category to collect its URLs (a short usage sketch follows this list)
  • Politics, economy, society, life/culture, world, IT/science
    • url = f'https://news.naver.com/section/{category_num}'
    • category_num = the section number that selects which page to crawl
    • The "load more articles" button on the page is clicked twice so that 100+ articles are available
  • Entertainment
    • url = f'https://entertain.naver.com/now#sid=106&date={today}&page={page}'
    • today = today's date in the 2024-02-19 format
    • page = 1–4, i.e. pages 1 through 4 are crawled in a loop
    • Each page lists 25 articles, so crawling 4 pages fills the 100-article quota
  • Sports
    • category = ["kfootball", "wfootball", "kbaseball", "wbaseball", "basketball", "volleyball", "golf"]
    • url = f'https://sports.news.naver.com/{category[i]}/news/index?isphoto=N&date={today}&page=1'
    • category[i] = the sport section being crawled
    • today = today's date in the 20240219 format
    • One page (roughly 20 articles) is crawled per sport
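A short usage sketch of the code above (run in the same session as the UrlCrawling class and urlThread):

url_crawler = UrlCrawling()
urlThread(url_crawler)                                   # starts and joins one thread per category

for name, df in zip(url_crawler.category_names, url_crawler.url_df_list):
    print(name, len(df))                                 # number of URLs collected per category
print(url_crawler.url_df_list[0].head())                 # columns: category, url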

 

3. Crawling the article bodies

Article body crawling code

# Crawl article bodies
class ContentCrawling:
    def __init__(self):
        self.category_names = ["정치", "경제", "사회", "생활/문화", "세계", "IT/과학", "연예", "스포츠"]
        self.title = [[] for _ in range(8)]
        self.content = [[] for _ in range(8)]
        self.img = [[] for _ in range(8)]
        self.lock = threading.Lock()

    def getContent(self, url_list, category_num):  # sections 100–105: 정치, 경제, 사회, 생활/문화, 세계, IT/과학
        title_list = []
        content_list = []
        img_list = []
        browser = webdriver.Chrome(options=options)

        for url in tqdm(url_list, desc=f"{self.category_names[category_num]} CONTENT"):
            flag = False
            browser.get(url)
            time.sleep(0.5)
            soup = BeautifulSoup(browser.page_source, "html.parser")

            try:
                title = soup.select("#title_area span")[0]
                content = soup.find_all(attrs={"id" : "dic_area"})
                self.getImg(soup, img_list)
                flag = True

            except IndexError:
                print("삭제된 기사")
                continue
            
            if flag:
                title_list.extend(title)
                content_list.extend(self.removeTag(content))
                
        with self.lock:
            for i in range(len(title_list)):
                try:
                    self.title[category_num].append(title_list[i].text)
                    self.content[category_num].append(cleanContent(content_list[i].text))
                    self.img[category_num].append(img_list[i])
                except IndexError:
                    print(i, category_num)
                    print(content_list[i])
                    print(content[category_num])

        browser.quit()


    def getEntertainmentContent(self, url_list):    # Entertainment (연예)
        title_list = []
        content_list = []
        img_list = []
        browser = webdriver.Chrome(options=options)

        for url in tqdm(url_list, desc="연예 CONTENT"):
            flag = False
            browser.get(url)
            time.sleep(0.5)
            soup = BeautifulSoup(browser.page_source, "html.parser")

            try:
                title = soup.select(".end_tit")
                content = soup.find_all(attrs={"class" : "article_body"})
                self.getImg(soup, img_list)
                flag = True

            except IndexError:
                print("삭제된 기사")
                continue

            if flag:
                title_list.extend(title)
                content_list.extend(self.removeTag(content))
                
        with self.lock:
            for i in range(len(title_list)):
                self.title[6].append(title_list[i].text)
                self.content[6].append(cleanContent(content_list[i].text))
                self.img[6].append(img_list[i])

        browser.quit()

    def getSportsContent(self, url_list):   # Sports (스포츠)
        title_list = []
        content_list = []
        img_list = []
        browser = webdriver.Chrome(options=options)

        for url in tqdm(url_list, desc="스포츠 CONTENT"):
            flag = False
            browser.get(url)                                                    
            time.sleep(0.5)
            soup = BeautifulSoup(browser.page_source, "html.parser")

            try:
                title = soup.select(".news_headline .title")
                content = soup.find_all(attrs={"class" : "news_end"})
                self.getImg(soup, img_list)
                flag = True

            except IndexError:
                print("삭제된 기사")
                continue
                
            if flag:
                title_list.extend(title)
                content_list.extend(self.removeTag(content))

        with self.lock:
            for i in range(len(title_list)):
                self.title[7].append(title_list[i].text)
                self.content[7].append(cleanContent(content_list[i].text))
                self.img[7].append(img_list[i])

        browser.quit()

    # Build the result DataFrame
    def makeDataFrame(self, all_url, category):    # convert the collected data into a DataFrame
        
        title, content, img = [], [], []
        for i in self.title:
            title.extend(i)
        for i in self.content:
            content.extend(i)
        for i in self.img:
            img.extend(i)

        data = {"category" : pd.Series(category),
                "title" : pd.Series(title),
                "content" : pd.Series(content),
                "img" : pd.Series(img),
                "url" : pd.Series(all_url)}

        news_df = pd.DataFrame(data)

        news_df.drop(news_df[news_df['content'].isna()].index, inplace=True)

        return news_df
    

    # Extract images
    def getImg(self, soup, img_list):
        img_tag = soup.select(".end_photo_org img")                     # grab the article's images

        if img_tag:                                                     # if there are images, keep only their src URLs
            img_src_list = []
            for img in img_tag:
                if len(img_src_list) < 10:                              # at most 10 images
                    if '.gif' not in img['src']:
                        img_src_list.append(img['src'])
            img_list.append(",".join(img_src_list))
        else:
            img_list.append("")


    # Remove unneeded tags from the body
    def removeTag(self, content):

        while content[0].find("strong"): content[0].find("strong").decompose()
        while content[0].find("small"): content[0].find("small").decompose()
        while content[0].find("table"): content[0].find("table").decompose()
        while content[0].find("b"): content[0].find("b").decompose()
        while content[0].find(attrs={"class" : "end_photo_org"}): content[0].find(attrs={"class" : "end_photo_org"}).decompose()        # 본문 이미지에 있는 글자 없애기
        while content[0].find(attrs={"class" : "vod_player_wrap"}): content[0].find(attrs={"class" : "vod_player_wrap"}).decompose()    # 본문 영상에 있는 글자 없애기
        while content[0].find(attrs={"id" : "video_area"}): content[0].find(attrs={"id" : "video_area"}).decompose()                    # 본문 영상 없애기
        while content[0].find(attrs={"name" : "iframe"}): content[0].find(attrs={"name" : "iframe"}).decompose()
        while content[0].find(attrs={"class" : "image"}): content[0].find(attrs={"class" : "image"}).decompose()
        while content[0].find(attrs={"class" : "vod_area"}): content[0].find(attrs={"class" : "vod_area"}).decompose()                  # 본문 영상 없애기

        if content[0].find(attrs={"class" : "artical-btm"}): content[0].find(attrs={"class" : "artical-btm"}).decompose()               # 하단에 제보하기 칸 있으면 삭제
        if content[0].find(attrs={"class" : "caption"}): content[0].find(attrs={"class" : "caption"}).decompose()                       # 이미지 설명 없애기
        if content[0].find(attrs={"class" : "source"}): content[0].find(attrs={"class" : "source"}).decompose()
        if content[0].find(attrs={"class" : "byline"}): content[0].find(attrs={"class" : "byline"}).decompose()
        if content[0].find(attrs={"class" : "reporter_area"}): content[0].find(attrs={"class" : "reporter_area"}).decompose()
        if content[0].find(attrs={"class" : "copyright"}): content[0].find(attrs={"class" : "copyright"}).decompose()
        if content[0].find(attrs={"class" : "categorize"}): content[0].find(attrs={"class" : "categorize"}).decompose()
        if content[0].find(attrs={"class" : "promotion"}): content[0].find(attrs={"class" : "promotion"}).decompose()

        return content
        

# Clean the article text (preprocessing before clustering)
def cleanContent(text):

    text = re.sub(r'\([^)]+\)', '', text)                     # drop parenthesized text (photo credits, agencies, ...)
    text = re.sub(r'\[[^\]]+\]', '', text)                     # drop bracketed tags such as [단독]
    text = re.sub(r'[^\s]*\s기자', '', text)                   # drop "... 기자" bylines
    text = re.sub(r'[^\s]*온라인 기자', '', text)               # drop "...온라인 기자" bylines
    text = re.sub(r'[^\s]*\s기상캐스터', '', text)              # drop "... 기상캐스터"
    text = re.sub('포토', '', text)                             # drop the word 포토
    text = re.sub(r'\S+@[a-z.]+', '', text)                    # drop e-mail addresses
    text = re.sub('[“”]', '"', text)                            # normalize curly double quotes
    text = re.sub('[‘’]', '\'', text)                           # normalize curly single quotes
    text = re.sub(r'\s{2,}', ' ', text)                         # collapse repeated whitespace
    text = re.sub(r'다\.(?=(?:[^"]*"[^"]*")*[^"]*$)', '다.\n', text)   # new line after sentences ending in "다." (outside quotes)
    text = re.sub('[\t\xa0]', '', text)                         # drop tabs and non-breaking spaces
    text = re.sub('[ㄱ-ㅎㅏ-ㅣ]+', '', text)                     # drop stray Hangul jamo
    text = re.sub('[=+#/^$@*※&ㆍ!』\\|\[\]\<\>`…》■□ㅁ◆◇▶◀▷◁△▽▲▼○●━]', '', text)   # drop special characters

    return text


# Content threads (one per category)
def contentThread(url_crawler, content_crawler):

    url_list = []
    all_url_list = np.array([])
    category_list = np.array([])

    for i in range(8):
        url_list.append(list(url_crawler.url_df_list[i]['url']))
        all_url_list = np.append(all_url_list, url_crawler.url_df_list[i]['url'])
        category_list = np.append(category_list, url_crawler.url_df_list[i]['category'])

    content_threads = []
    for i in range(8):
        if i <= 5:
            content_thread = threading.Thread(target=content_crawler.getContent, args=(url_list[i], i))
        elif i == 6:
            content_thread = threading.Thread(target=content_crawler.getEntertainmentContent, args=(url_list[i],))
        else:
            content_thread = threading.Thread(target=content_crawler.getSportsContent, args=(url_list[i],))
        content_threads.append(content_thread)
        content_thread.start()

    for content_thread in content_threads:
        content_thread.join()

    news_df = content_crawler.makeDataFrame(all_url_list, category_list)                      # build the article DataFrame

    return news_df

 

  • The URLs collected above are used to collect each article's title, body, and images
  • As before, one thread is created per category
  • Along the way, tags holding unneeded content are removed from the body and the text is preprocessed
  • Preprocessing removes words that could skew the clustering ("... 기자", "... 기상캐스터", etc.), e-mail addresses, and special characters; a small before/after sketch follows this list
  • The result is a DataFrame containing category, title, body, image URLs, and article URL
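A small before/after sketch of cleanContent (the input sentence is invented purely for illustration):

sample = '[단독] 홍길동 기자 = 삼성전자가 신제품을 공개했다. (사진=삼성전자) reporter@example.com 관계자는 "기대가 크다"고 말했다.'
print(cleanContent(sample))
# the [단독] tag, the "... 기자" byline, the parenthesized photo credit and the e-mail address are removed,
# and each sentence ending in "다." (outside quotes) is placed on its own line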

 

Full code

(The complete script simply combines the imports, UrlCrawling, ContentCrawling, cleanContent, urlThread, and contentThread listed above, plus the startCrawling entry point below; the full file is in the GitHub repository linked at the top.)

# Start crawling: collect the URLs first, then the article contents
def startCrawling():
    url_crawler = UrlCrawling()
    content_crawler = ContentCrawling()

    urlThread(url_crawler)

    news_df = contentThread(url_crawler, content_crawler)

    return news_df
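
A minimal usage sketch (the CSV filename is just an example; the clustering step in the next post consumes this DataFrame):

if __name__ == "__main__":
    news_df = startCrawling()                        # collect URLs, then titles, bodies and images for all 8 categories
    print(news_df['category'].value_counts())        # how many articles were kept per category
    news_df.to_csv("news.csv", index=False)          # persist for the clustering step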